Searching Sorted Data ({\bf SOSD}, in short) is a highly engineered software platform for benchmarking learned indexes, the latter being novel and quite effective proposals of algorithms for searching sorted tables, obtained by combining machine learning techniques with classic ones. In that platform, and in the related benchmarking experiments, following a natural and intuitive choice, the final search stage is performed via the standard (textbook) binary search procedure. However, recent studies, which do not use machine learning predictions, indicate that uniform binary search, streamlined to avoid \vir{branching} in the main loop, is superior in performance to its standard counterpart when the table to be searched is relatively small, e.g., fitting in L1 or L2 cache. Analogous results hold for k-ary search, even on large tables. One would expect an analogous behavior within learned indexes. Via an extensive set of experiments, coherent with the state of the art, we show that, for learned indexes and as far as the {\bf SOSD} software is concerned, the use of the standard routines (either binary or k-ary search) is superior to the uniform ones, across all internal memory levels. This fact provides a quantitative justification of the natural choice made so far. Our experiments also indicate that uniform binary and k-ary search may be advantageous to use in order to save space in learned indexes, while granting good performance. Our findings are of methodological relevance for this novel and fast-growing area, as well as of interest to practitioners willing to use learned indexes in application domains such as databases and search engines.
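To make the distinction concrete, the \vir{branch-free} loop alluded to above can be sketched as follows; this is a generic illustration of the uniform binary search technique, not the {\bf SOSD} implementation itself. In compiled languages the conditional update in the loop body becomes a conditional move, removing the unpredictable branch:

```python
def uniform_lower_bound(a, x):
    """Index of the first element of sorted list `a` that is >= x.

    Every iteration performs the same halving step regardless of the
    comparison outcome; in C/C++ the conditional update compiles to a
    branch-free conditional move (cmov), which is what makes the
    "uniform" variant fast on cache-resident tables.
    """
    base, n = 0, len(a)
    while n > 1:
        half = n // 2
        # In C: base += (a[base + half - 1] < x) * half;  (no branch)
        if a[base + half - 1] < x:
            base += half
        n -= half
    if base < len(a) and a[base] < x:
        base += 1
    return base
```

The result matches the textbook lower bound (e.g., Python's `bisect.bisect_left`); only the control flow differs.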
Machine learning techniques, combined with data structures, have given rise to learned static indexes: innovative and powerful tools that speed up binary search by using additional space with respect to the table being searched. Such space is devoted to the ML model. Although in their infancy, they are methodologically and practically important, due to the pervasiveness of sorted table search procedures. In modern applications, model space is a key factor, and indeed a major open question concerning this area is to assess to what extent one can enjoy the speed-up of learned indexes while using constant or nearly constant space models. We address it here by (a) introducing two new models, denoted {\bf KO-BFS} and {\bf SY-RMI}, respectively; (b) systematically exploring the time-space trade-offs of a hierarchy of existing models, i.e., the ones in {\bf SOSD}, together with the new ones. We document a novel and rather complex time-space trade-off picture, which is quite informative for users. We experimentally show that {\bf KO-BFS} can speed up interpolation search and uniform binary search in constant space. For other versions of binary search, our second model, together with the bi-criteria {\bf PGM} index, can achieve speed-ups with a model space of $0.05\%$ of the space taken by the table, being competitive in terms of time-space trade-offs with existing proposals. The {\bf SY-RMI} and the bi-criteria {\bf PGM} complement each other quite well across the various levels of the internal memory hierarchy. Finally, our findings are of interest to designers, since they highlight the need for further studies regarding the time-space relation in learned indexes.
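As background for the interpolation search mentioned above, a minimal sketch of the textbook routine (not the {\bf KO-BFS} model itself): the probe position is estimated by linear interpolation between the endpoint keys, rather than by halving.

```python
def interpolation_search(a, x):
    """Classic interpolation search on a sorted list of numbers.

    Instead of probing the middle, the next probe is placed where x
    would lie if the keys were uniformly distributed.  Returns the
    index of x, or -1 if x is absent.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= x <= a[hi]:
        if a[hi] == a[lo]:           # avoid division by zero on equal keys
            mid = lo
        else:
            mid = lo + (x - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On uniformly distributed keys this takes O(log log n) probes on average, which is why pairing it with a small learned model is attractive.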
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it: (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is threefold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
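The two-stage decision rule described above can be sketched abstractly. The representation, names, and threshold below are hypothetical illustrations, not INVALIDATOR's actual interfaces: invariants are modeled as sets of strings, with `buggy_only_invs` holding invariants that characterize the erroneous behavior of the original program.

```python
def assess_patch(patch_invs, correct_invs, buggy_only_invs,
                 syntactic_score, threshold=0.5):
    """Label an APR-generated patch as "overfitting" or "correct".

    Semantic stage: the patch overfits if it (1) violates the
    specification inferred from the developer-patched program, or
    (2) preserves invariants unique to the buggy program.
    Syntactic stage: otherwise, fall back to a learned classifier's
    score on the patch's syntax.
    """
    violates_spec = not correct_invs <= patch_invs   # (1) required invariants missing
    keeps_bug = bool(patch_invs & buggy_only_invs)   # (2) buggy-only invariants retained
    if violates_spec or keeps_bug:
        return "overfitting"
    return "overfitting" if syntactic_score >= threshold else "correct"
```

The point of the fallback is that invariant-based reasoning is sound but incomplete: when neither semantic condition fires, the syntactic model breaks the tie.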
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results, alongside all data and model artifacts created during our investigation.
Machine Learning (ML) approaches have been used to enhance the detection capabilities of Network Intrusion Detection Systems (NIDSs). Recent work has achieved near-perfect performance on binary- and multi-class network anomaly detection tasks. Such systems depend on the availability of both (benign and malicious) network data classes during the training phase. However, attack data samples are often challenging to collect in most organisations due to security controls preventing the penetration of known malicious traffic into their networks. Therefore, this paper proposes a Deep One-Class (DOC) classifier for network intrusion detection that is trained only on benign network data samples. The novel one-class classification architecture consists of a histogram-based deep feed-forward classifier that extracts useful network data features and performs efficient outlier detection. The DOC classifier has been extensively evaluated using two benchmark NIDS datasets. The results demonstrate its superiority over current state-of-the-art one-class classifiers in terms of detection and false positive rates.
The automation of an increasingly large number of software engineering tasks is becoming possible thanks to Machine Learning (ML). One foundational building block in the application of ML to software artifacts is the representation of these artifacts (e.g., source code or executable code) in a form that is suitable for learning. Many studies have leveraged representation learning, delegating to ML itself the job of automatically devising suitable representations. Yet, in the context of Android problems, existing models are either limited to the coarse-grained whole-app level (e.g., apk2vec) or built for one specific downstream task (e.g., smali2vec). Our work is part of a new line of research that investigates effective, task-agnostic, and fine-grained universal representations of bytecode to mitigate both of these limitations. Such representations aim to capture information relevant to various low-level downstream tasks (e.g., at the class level). We are inspired by the field of Natural Language Processing, where the problem of universal representation was addressed by building Universal Language Models, such as BERT, whose goal is to capture abstract semantic information about sentences in a way that is reusable for a variety of tasks. We propose DexBERT, a BERT-like Language Model dedicated to representing chunks of DEX bytecode, the main binary format used in Android applications. We empirically assess whether DexBERT is able to model the DEX language and evaluate the suitability of our model in two distinct class-level software engineering tasks: Malicious Code Localization and Defect Prediction. We also experiment with strategies to deal with the problem of catering to apps of vastly different sizes, and we demonstrate one example of using our technique to investigate what information is relevant to a given task.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Recently, transformer networks have outperformed traditional deep neural networks in natural language processing and show a large potential in many computer vision tasks compared to convolutional backbones. In the original transformer, readout tokens are used as designated vectors for aggregating information from other tokens. However, the performance of using readout tokens in a vision transformer is limited. Therefore, we propose a novel fusion strategy to integrate radar data into a dense prediction transformer network by reassembling camera representations with radar representations. Instead of using readout tokens, radar representations contribute additional depth information to a monocular depth estimation model and improve performance. We further investigate different fusion approaches that are commonly used for integrating additional modality in a dense prediction transformer network. The experiments are conducted on the nuScenes dataset, which includes camera images, lidar, and radar data. The results show that our proposed method yields better performance than the commonly used fusion strategies and outperforms existing convolutional depth estimation models that fuse camera images and radar.
We present a neural network architecture for medical image segmentation of diabetic foot ulcers and colonoscopy polyps. Diabetic foot ulcers are caused by neuropathic and vascular complications of diabetes mellitus. In order to provide a proper diagnosis and treatment, wound care professionals need to extract accurate morphological features from foot wounds. Using a computer-aided system is a promising approach to extract the related morphological features and segment the lesions. We propose a convolutional neural network called HarDNet-DFUS, designed by enhancing the backbone and replacing the decoder of HarDNet-MSEG, which was the SOTA for colonoscopy polyp segmentation in 2021. We train HarDNet-DFUS using the DFUC2022 dataset and increase its robustness by means of five-fold cross validation, Test Time Augmentation, etc. In the validation phase of DFUC2022, HarDNet-DFUS achieved 0.7063 mean Dice and ranked third among all participants. In the final testing phase of DFUC2022, it achieved 0.7287 mean Dice and took first place. HarDNet-DFUS also delivers excellent performance on the colonoscopy polyp segmentation task. It achieved 0.924 mean Dice on the famous Kvasir dataset, a 1.2\% improvement over the original HarDNet-MSEG. The codes are available at https://github.com/kytimmylai/dfuc2022 (for diabetic foot ulcer segmentation) and https://github.com/yuwenlo/hardnet-dfus (for colonoscopy polyp segmentation).
Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown that, in neural networks, this alignment is often poor. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets, and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions, such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses implied.
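For reference, the classical influence-function estimate that the paragraph above compares against leave-one-out retraining is obtained (in the standard formulation, via the implicit function theorem; the symbols below are generic, not this work's notation) by up-weighting a training point $z$ by $\epsilon$ and differentiating the perturbed optimum:

```latex
\hat\theta_\epsilon = \arg\min_\theta \frac{1}{n}\sum_{i=1}^{n} L(z_i,\theta) + \epsilon\, L(z,\theta),
\qquad
\left.\frac{d\hat\theta_\epsilon}{d\epsilon}\right|_{\epsilon=0}
= -\,H_{\hat\theta}^{-1}\,\nabla_\theta L(z,\hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n}\nabla_\theta^{2} L(z_i,\hat\theta).
```

Removing $z$ corresponds to setting $\epsilon = -1/n$, so the predicted parameter change is $\frac{1}{n} H_{\hat\theta}^{-1} \nabla_\theta L(z,\hat\theta)$; the discrepancy decomposed above measures how far this linearized prediction is from actual retraining in nonlinear networks.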